Numerical control (NC) refers to the automation of machine tools that are operated by abstractly programmed commands encoded on a storage medium, as opposed to controlled manually via handwheels or levers, or mechanically automated via cams alone. The first NC machines were built in the 1940s and 1950s, based on existing tools that were modified with motors that moved the controls to follow points fed into the system on punched tape. These early servomechanisms were rapidly augmented with analog and digital computers, creating the modern computer numerical control (CNC) machine tools that have revolutionized machining processes.
In modern CNC systems, end-to-end component design is highly automated using computer-aided design (CAD) and computer-aided manufacturing (CAM) programs. The programs produce a computer file that is interpreted to extract the commands needed to operate a particular machine via a postprocessor, and then loaded into the CNC machines for production. Since any particular component might require the use of a number of different tools (drills, saws, etc.), modern machines often combine multiple tools into a single "cell". In other cases, a number of different machines are used with an external controller and human or robotic operators that move the component from machine to machine. In either case, the complex series of steps needed to produce any part is highly automated and produces a part that closely matches the original CAD design.
The automation of machine tool control began in the 19th century with cams that "played" a machine tool in the way that cams had long been playing musical boxes or operating elaborate cuckoo clocks. Thomas Blanchard built his gun-stock-copying lathes (1820s–30s), and the work of people such as Christopher Miner Spencer developed the turret lathe into the screw machine (1870s). Cam-based automation had already reached a highly advanced state by World War I (1910s).
However, automation via cams is fundamentally different from numerical control because it cannot be abstractly programmed. Cams can encode information, but getting the information from the abstract level of an engineering drawing into the cam is a manual process that requires sculpting and/or machining and filing.
Various forms of abstractly programmable control had existed during the 19th century: those of the Jacquard loom, player pianos, and mechanical computers pioneered by Charles Babbage and others. These developments had the potential for convergence with the automation of machine tool control starting in that century, but the convergence did not happen until many decades later.
The application of hydraulics to cam-based automation resulted in tracing machines that used a stylus to trace a template, such as the enormous Pratt & Whitney "Keller Machine", which could copy templates several feet across.[1] Another approach was "record and playback", pioneered at General Motors (GM) in the 1950s, which used a storage system to record the movements of a human machinist, and then play them back on demand. Analogous systems are common even today, notably the "teaching lathe" which gives new machinists a hands-on feel for the process. None of these were numerically programmable, however, and required a master machinist at some point in the process, because the "programming" was physical rather than numerical.
One barrier to complete automation was the required tolerances of the machining process, which are routinely on the order of thousandths of an inch. Although connecting some sort of control to a storage device like punched cards was easy, ensuring that the controls were moved to the correct position with the required accuracy was another issue. The movement of the tool resulted in varying forces on the controls that would mean a linear input would not result in linear tool motion. The key development in this area was the introduction of the servomechanism, which produced highly accurate measurement information. Attaching two servos together produced a selsyn, where a remote servo's motions were accurately matched by another. Using a variety of mechanical or electrical systems, the output of the selsyns could be read to ensure proper movement had occurred (in other words, forming a closed-loop control system).
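The closed-loop principle described above can be illustrated with a toy sketch. This is purely illustrative Python, not the historical electromechanical implementation: the controller repeatedly measures the actual position, compares it against the target, and issues a correction, so the non-linear response of the load no longer lets errors accumulate the way an open-loop command would.

```python
def closed_loop_move(target, gain=0.5, tol=1e-3, max_steps=1000):
    """Toy closed-loop positioning: command a correction, measure the
    actual position, and repeat until the error is within tolerance --
    the role the selsyn feedback played in NC machines."""
    position = 0.0
    for _ in range(max_steps):
        error = target - position
        if abs(error) < tol:
            break
        # The simulated load responds non-linearly (different gain in
        # different regions), which would defeat a fixed open-loop
        # command; the feedback loop still converges.
        response = 0.8 if position < target / 2 else 1.1
        position += gain * error * response
    return position
```

Because each cycle corrects the *measured* error rather than assuming a fixed input-to-motion relationship, the loop converges even though the load's response varies along the travel.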
The first serious suggestion that selsyns could be used for machining control was made by Ernst F. W. Alexanderson, a Swedish immigrant to the U.S. working at General Electric (GE). Alexanderson had worked on the problem of torque amplification that allowed the small output of a mechanical computer to drive very large motors, which GE used as part of a larger gun laying system for US Navy ships. Like machining, gun laying requires very high accuracy - fractions of a degree - and the forces during the motion of the gun turrets were non-linear. In November 1931 Alexanderson suggested to the Industrial Engineering Department that the same systems could be used to drive the inputs of machine tools, allowing them to follow the outline of a template without the strong physical contact needed by existing tools like the Keller Machine. He stated that it was a "matter of straight engineering development".[2] However, the concept was ahead of its time from a business development perspective, and GE did not take the matter seriously until years later, when others had pioneered the field.
The birth of NC is generally credited to John T. Parsons,[3] a machinist and salesman at his father's machining company, Parsons Corp.
In 1942 he was told that helicopters were going to be the "next big thing" by the former head of Ford Trimotor production, Bill Stout. He called Sikorsky Aircraft to inquire about possible work, and soon got a contract to build the wooden stringers in the rotor blades. At the time, rotors were built in the same fashion as wings, consisting of a long tubular steel spar with stringers (or more accurately ribs) set on it to provide the aerodynamic shape that was then covered with a stressed skin. The stringers for the rotors were built from a design provided by Sikorsky, which was sent to Parsons as a series of 17 points defining the outline. Parsons then had to "fill in" the dots with a French curve to generate an outline. A wooden jig was built up to form the outside of the outline, and the pieces of wood forming the stringer were placed under pressure against the inside of the jig so they formed the proper curve. A series of trusswork members were then assembled inside this outline to provide strength.[4]
After setting up production at a disused furniture factory and ramping up production, one of the blades failed, and the failure was traced to a problem in the spar. At least some of the problem appeared to stem from spot welding a metal collar on the stringer to the metal spar. The collar was built into the stringer during construction, then slid onto the spar and welded in the proper position. Parsons suggested a new method of attaching the stringers directly to the spar using adhesives, never before tried on an aircraft design.[4]
That development led Parsons to consider the possibility of using stamped metal stringers instead of wood. These would not only be much stronger, but far easier to make as well, as they would eliminate the complex layup and the glue-and-screw fastening of the wood. Duplicating this in a metal punch would require the wooden jig to be replaced by a metal cutting tool made of tool steel. Such a device would not be easy to produce given the complex outline. Looking for ideas, Parsons visited Wright Field to see Frank Stulen, the head of the Propeller Lab Rotary Ring Branch. During their conversation, Stulen concluded that Parsons didn't really know what he was talking about. Parsons realized Stulen had reached this conclusion, and hired him on the spot. Stulen started work on 1 April 1946 and hired three new engineers to join him.[4]
Stulen's brother worked at Curtiss-Wright Propeller, and mentioned that they were using punched card calculators for engineering calculations. Stulen adapted the idea to run stress calculations on the rotors, the first detailed automated calculations on helicopter rotors.[4] When Parsons saw what Stulen was doing with the punched card machines, he asked Stulen if they could be used to generate an outline with 200 points instead of the 17 they were given, and offset each point by the radius of a mill cutting tool. Cutting at each of those points would produce a relatively accurate cutout of the stringer. This could cut the tool steel and then easily be filed down to a smooth template for stamping metal stringers.[4]
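The offsetting idea, displacing each outline point by the cutter radius so the edge of the mill, rather than its centre, lands on the desired outline, can be sketched in Python. This is an illustration under stated assumptions, not the original punched-card procedure: the offset direction is approximated by the normal to the chord joining each point's neighbours, and the outline is assumed to be a closed, counter-clockwise loop.

```python
import math

def offset_points(points, radius):
    """Offset each point of a closed outline outward by the cutter radius.

    The outward direction at each point is approximated by the normal to
    the chord between its neighbours (a common simplification); the
    outline is assumed closed and counter-clockwise.
    """
    n = len(points)
    offset = []
    for i in range(n):
        x0, y0 = points[i - 1]          # previous point (wraps around)
        x1, y1 = points[(i + 1) % n]    # next point
        dx, dy = x1 - x0, y1 - y0       # chord direction
        length = math.hypot(dx, dy)
        # Unit normal to the chord; this sign points "outside" for a
        # counter-clockwise outline.
        nx, ny = dy / length, -dx / length
        px, py = points[i]
        offset.append((px + radius * nx, py + radius * ny))
    return offset
```

Cutting a plunge at each offset point then leaves the tool tangent to the desired outline, which is why a denser set of points (200 rather than 17) produces a cutout needing only light filing.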
Stulen had no problem making such a program, and used it to produce large tables of numbers that would be taken onto the machine floor. Here, one operator read the numbers off the charts to two other operators, one each on the X- and Y-axes. For each pair of numbers the operators would move the cutting head to the indicated spot and then lower the tool to make the cut.[4] This was called the "by-the-numbers method", or more technically, "plunge-cutting positioning".[5]
At that point Parsons conceived of a fully automated machine tool. With enough points on the outline, no manual working would be needed to clean it up. However, with manual operation the time saved by having the part more closely match the outline was offset by the time needed to move the controls. If the machine's inputs were attached directly to the card reader, this delay, and any associated manual errors, would be removed and the number of points could be dramatically increased. Such a machine could repeatedly punch out perfectly accurate templates on command. But at the time Parsons had no funds to develop his ideas.
When one of Parsons's salesmen was on a visit to Wright Field, he was told of the problems the newly formed US Air Force was having with new jet-powered designs. He asked if Parsons had anything to help them. Parsons showed Lockheed their idea of an automated mill, but they were uninterested. They decided to use 5-axis template copiers to produce the stringers, cutting from a metal template, and had already ordered the expensive cutting machine. But as Parsons noted:
Now just picture the situation for a minute. Lockheed had contracted to design a machine to make these wings. This machine had five axes of cutter movement, and each of these was tracer controlled using a template. Nobody was using my method of making templates, so just imagine what chance they were going to have of making an accurate airfoil shape with inaccurate templates.[4]
Parsons's worries soon came true, and Lockheed's protests that they could fix the problem eventually rang hollow. In 1949 the Air Force arranged funding for Parsons to build his machines on his own.[4] Early work with Snyder Machine & Tool Corp proved the system of directly driving the controls from motors failed to give the accuracy needed to set the machine for a perfectly smooth cut. Since the mechanical controls did not respond in a linear fashion, the machine could not simply be driven with a given amount of power: the differing forces meant the same amount of power would not always produce the same amount of motion in the controls. No matter how many points were included, the outline would still be rough.
This was not an impossible problem to solve, but would require some sort of feedback system, like a selsyn, to directly measure how far the controls had actually turned. Faced with the daunting task of building such a system, in the spring of 1949 Parsons turned to Gordon S. Brown's Servomechanisms Laboratory at MIT, which was a world leader in mechanical computing and feedback systems.[6] During the war the Lab had built a number of complex motor-driven devices like the motorized gun turret systems for the Boeing B-29 Superfortress and the automatic tracking system for the SCR-584 radar. They were naturally suited to technological transfer into a prototype of Parsons's automated "by-the-numbers" machine.
The MIT team was led by William Pease assisted by James McDonough. They quickly concluded that Parsons's design could be greatly improved; if the machine did not simply cut at points A and B, but instead moved smoothly between the points, then not only would it make a perfectly smooth cut, it could do so with many fewer points - the mill could cut lines directly instead of having to define a large number of cutting points to "simulate" a line. A three-way agreement was arranged between Parsons, MIT, and the Air Force, and the project officially ran from July 1949 to June 1950.[7] The contract called for the construction of two "Card-a-matic Milling Machines", a prototype and a production system. Both were to be handed to Parsons for attachment to one of their mills in order to develop a deliverable system for cutting stringers.
Instead, in 1950 MIT bought a surplus Cincinnati Milling Machine Company "Hydro-Tel" mill of their own and arranged a new contract directly with the Air Force that froze Parsons out of further development.[4] Parsons would later comment that he "never dreamed that anybody as reputable as MIT would deliberately go ahead and take over my project."[4] In spite of the development being handed to MIT, Parsons filed for a patent on "Motor Controlled Apparatus for Positioning Machine Tool" on 5 May 1952, sparking a filing by MIT for a "Numerical Control Servo-System" on 14 August 1952. Parsons received US Patent 2,820,187 on 14 January 1958, and the company sold an exclusive license to Bendix. IBM, Fujitsu and General Electric all took sub-licenses after having already started development of their own devices.
MIT fit gears to the various handwheel inputs and drove them with roller chains connected to motors, one for each of the machine's three axes (X, Y, and Z). The associated controller consisted of five refrigerator-sized cabinets that, together, were almost as large as the mill they were connected to. Three of the cabinets contained the motor controllers, one controller for each motor, while the other two housed the digital reading system.[1]
Unlike Parsons's original punched card design, the MIT design used standard 7-track punch tape for input. Three of the tracks were used to control the different axes of the machine, while the other four encoded various control information.[1] The tape was read in a cabinet that also housed six relay-based hardware registers, two for each axis. With every read operation the previously read point was copied into the "starting point" register, and the newly read one into the "ending point" register.[1] The tape was read continually and the number in the registers incremented with each hole encountered in their control track until a "stop" instruction was encountered, four holes in a line.
The final cabinet held a clock that sent pulses through the registers, compared them, and generated output pulses that interpolated between the points. For instance, if the points were far apart the output would have pulses with every clock cycle, whereas closely spaced points would only generate pulses after multiple clock cycles. The pulses were sent into a summing register in the motor controllers, which counted up by the number of pulses each time they were received. The summing registers were connected to a digital-to-analog converter that increased power to the motors as the count in the registers increased, making the controls move faster.[1]
The registers were decremented by encoders attached to the motors and the mill itself, which would reduce the count by one for every one degree of rotation. Once the second point was reached the counter would hold a zero, the pulses from the clock would stop, and the motors would stop turning. Each 1 degree rotation of the controls produced a 0.0005 inch movement of the cutting head. The programmer could control the speed of the cut by selecting points that were closer together for slow movements, or further apart for rapid ones.[1]
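Using the figures above (one encoder count per degree of control rotation, and 0.0005 inch of head travel per degree), the number of counts a summing register had to accumulate for a given move can be sketched as follows. This is an illustrative Python reconstruction of the arithmetic, with assumed function and variable names, not the original relay logic:

```python
STEP_INCH = 0.0005  # cutting-head travel per 1 degree of control rotation

def pulses_for_move(start, end):
    """Encoder counts (degrees of rotation) each axis must accumulate to
    move the head from `start` to `end`, per the 0.0005 in/degree figure.

    `start` and `end` are (x, y) positions in inches; the summing
    register on each axis counts up by this many pulses and is then
    decremented back to zero by the encoder as the motor turns.
    """
    return tuple(round(abs(e - s) / STEP_INCH) for s, e in zip(start, end))
```

For example, a half-inch move along X requires 1000 counts, so widely spaced programmed points imply many counts per read cycle and hence a fast feed, which is how the programmer controlled cutting speed through point spacing.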
The system was publicly demonstrated in September 1952, appearing in that month's Scientific American.[1] MIT's system was an outstanding success by any technical measure, quickly making any complex cut with extremely high accuracy that could not easily be duplicated by hand. However, the system was terribly complex, including 250 vacuum tubes, 175 relays and numerous moving parts, reducing its reliability in a production environment. It was also very expensive; the total bill presented to the Air Force was $360,000.14 ($2,641,727.63 in 2005 dollars).[8] Between 1952 and 1956 the system was used to mill a number of one-off designs for various aviation firms, in order to study their potential economic impact.[9]
The Air Force funding for the project ran out in 1953, but development was picked up by the Giddings and Lewis Machine Tool Co. In 1955 many of the MIT team left to form Concord Controls, a commercial NC company with Giddings' backing, producing the Numericord controller.[9] Numericord was similar to the MIT design, but replaced the punch tape with a magnetic tape reader that General Electric was working on. The tape contained a number of signals of different phases, which directly encoded the angle of the various controls. The tape was played at a constant speed in the controller, which set its half of the selsyn to the encoded angles while the remote side was attached to the machine controls. Designs were still encoded on paper tape, but the tapes were transferred to a reader/writer that converted them into magnetic form. The magtapes could then be used on any of the machines on the floor, where the controllers were greatly reduced in complexity. Developed to produce highly accurate dies for an aircraft skinning press, the Numericord "NC5" went into operation at G&L's plant at Fond du Lac, WI in 1955.[10]
Monarch Machine Tool also developed a numerically controlled lathe, starting in 1952. They demonstrated their machine at the 1955 Chicago Machine Tool Show (predecessor of today's IMTS), along with a number of other vendors with punched card or paper tape machines that were either fully developed or in prototype form. These included Kearney & Trecker's Milwaukee-Matic II that could change its cutting tool under numerical control,[10] a common feature on modern machines.
A Boeing report noted that "numerical control has proved it can reduce costs, reduce lead times, improve quality, reduce tooling and increase productivity."[10] In spite of these developments, and glowing reviews from the few users, uptake of NC was relatively slow. As Parsons later noted:
The NC concept was so strange to manufacturers, and so slow to catch on, that the US Army itself finally had to build 120 NC machines and lease them to various manufacturers to begin popularizing its use.[4]
In 1958 MIT published its report on the economics of NC. They concluded that the tools were competitive with human operators, but simply moved the time from the machining to the creation of the tapes. In Forces of Production, Noble[11] claims that this was the whole point as far as the Air Force was concerned; moving the process off of the highly unionized factory floor and into the un-unionized white collar design office. The cultural context of the early 1950s, a second Red Scare with a widespread fear of a bomber gap and of domestic subversion, sheds light on this interpretation. It was strongly feared that the West would lose the defense production race to the Communists, and that syndicalist power was a path toward losing, either by "getting too soft" (less output, greater unit expense) or even by Communist sympathy and subversion within unions (arising from their common theme of empowering the working class).
Many of the commands for the experimental parts were programmed "by hand" to produce the punch tapes that were used as input. During the development of Whirlwind, MIT's real-time computer, John Runyon coded a number of subroutines to produce these tapes under computer control. Users could enter a list of points and speeds, and the program would calculate the points needed and automatically generate the punch tape. In one instance, this process reduced the time required to produce the instruction list and mill the part from 8 hours to 15 minutes. This led to a proposal to the Air Force to produce a generalized "programming" language for numerical control, which was accepted in June 1956.[9]
Starting in September, Ross and Pople outlined a language for machine control that was based on points and lines, developing this over several years into the APT programming language. In 1957 the Aircraft Industries Association (AIA) and Air Material Command at Wright-Patterson Air Force Base joined with MIT to standardize this work and produce a fully computer-controlled NC system. On 25 February 1959 the combined team held a press conference showing the results, including a 3D machined aluminum ash tray that was handed out in the press kit.[9]
Meanwhile, Patrick Hanratty was making similar developments at GE as part of their partnership with G&L on the Numericord. His language, PRONTO, beat APT into commercial use when it was released in 1958.[12] Hanratty then went on to develop MICR, the magnetic-ink characters used in cheque processing, before moving to General Motors to work on the groundbreaking DAC-1 CAD system.
APT was soon extended to include "real" curves in 2D-APT-II. With its release, MIT reduced its focus on NC as it moved into CAD experiments. APT development was picked up with the AIA in San Diego, and in 1962, by the Illinois Institute of Technology Research Institute. Work on making APT an international standard started in 1963 under USASI X3.4.7, but many manufacturers of NC machines had their own one-off additions (like PRONTO), so standardization was not completed until 1968, when there were 25 optional add-ins to the basic system.[9]
Just as APT was being released in the early 1960s, a second generation of lower-cost transistorized computers was hitting the market that were able to process much larger volumes of information in production settings. This reduced the cost of programming for NC machines and by the mid 1960s, APT runs accounted for a third of all computer time at large aviation firms.
While the Servomechanisms Lab was in the process of developing their first mill, in 1953, MIT's Mechanical Engineering Department dropped the requirement that undergraduates take courses in drawing. The instructors formerly teaching these programs were merged into the Design Division, where an informal discussion of computerized design started. Meanwhile the Electronic Systems Laboratory, the newly rechristened Servomechanisms Laboratory, had been discussing whether or not design would ever start with paper diagrams in the future.[13]
In January 1959, an informal meeting was held involving individuals from both the Electronic Systems Laboratory and the Mechanical Engineering Department's Design Division. Formal meetings followed in April and May, which resulted in the "Computer-Aided Design Project". In December 1959, the Air Force issued a one year contract to ESL for $223,000 to fund the project, including $20,800 earmarked for 104 hours of computer time at $200 per hour.[14] This proved to be far too little for the ambitious program they had in mind, although their engineering calculation system, AED, was released in March 1965.
In 1959, General Motors started an experimental project to digitize, store and print the many design sketches being generated in the various GM design departments. When the basic concept demonstrated that it could work, they started the DAC-1 project with IBM to develop a production version. One part of the DAC project was the direct conversion of paper diagrams into 3D models, which were then converted into APT commands and cut on milling machines. In November 1963 a design for the lid of a trunk moved from 2D paper sketch to 3D clay prototype for the first time.[15] With the exception of the initial sketch, the design-to-production loop had been closed.
Meanwhile, MIT's offsite Lincoln Labs was building computers to test new transistorized designs. The ultimate goal was essentially a transistorized Whirlwind known as TX-2, but in order to test various circuit designs a smaller version known as TX-0 was built first. When construction of TX-2 started, time in TX-0 freed up and this led to a number of experiments involving interactive input and use of the machine's CRT display for graphics. Further development of these concepts led to Ivan Sutherland's groundbreaking Sketchpad program on the TX-2.
Sutherland moved to the University of Utah after his Sketchpad work, but it inspired other MIT graduates to attempt the first true CAD system: the Electronic Drafting Machine (EDM), sold to Control Data and known as "Digigraphics", which Lockheed used to build production parts for the C-5 Galaxy, the first example of an end-to-end CAD/CNC production system.
By 1970 there were a wide variety of CAD firms including Intergraph, Applicon, Computervision, Auto-trol Technology, UGS Corp. and others, as well as large vendors like CDC and IBM.
The price of computer cycles fell drastically during the 1960s with the widespread introduction of useful minicomputers. Eventually it became less expensive to handle the motor control and feedback with a computer program than it was with dedicated servo systems. Small computers were dedicated to a single mill, placing the entire process in a small box. PDP-8s and Data General Nova computers were common in these roles. The introduction of the microprocessor in the 1970s further reduced the cost of implementation, and today almost all CNC machines use some form of microprocessor to handle all operations.
The introduction of lower-cost CNC machines radically changed the manufacturing industry. Curves are as easy to cut as straight lines, complex 3-D structures are relatively easy to produce, and the number of machining steps that require human action has been dramatically reduced. With the increased automation of manufacturing processes with CNC machining, considerable improvements in consistency and quality have been achieved with no strain on the operator. CNC automation reduced the frequency of errors and provided CNC operators with time to perform additional tasks. CNC automation also allows for more flexibility in the way parts are held in the manufacturing process and reduces the time required to change the machine over to produce different components.
During the early 1970s the Western economies were mired in slow economic growth and rising employment costs, and NC machines started to become more attractive. The major U.S. vendors were slow to respond to the demand for machines suitable for lower-cost NC systems, and into this void stepped the Germans. In 1979, sales of German machines surpassed the U.S. designs for the first time. This cycle quickly repeated itself, and by 1980 Japan had taken a leadership position, with U.S. sales dropping all the while. Cincinnati Milacron, which sat in the #1 position in 1971 on a top-ten chart consisting entirely of U.S. companies, had fallen to 8th place by 1987 on a chart heavily dominated by Japanese firms.[16]
Many researchers have commented that the U.S. focus on high-end applications left them in an uncompetitive situation when the economic downturn in the early 1970s led to greatly increased demand for low-cost NC systems. Unlike the U.S. companies, who had focused on the highly profitable aerospace market, German and Japanese manufacturers targeted lower-profit segments from the start and were able to enter the low-cost markets much more easily.[16][17]
As computing and networking evolved, so did direct numerical control (DNC). Its long-term coexistence with less networked variants of NC and CNC is explained by the fact that individual firms tend to stick with whatever is profitable, and their time and money for trying out alternatives is limited. This explains why machine tool models and tape storage media persist in grandfathered fashion even as the state of the art advances.
Recent developments in small-scale CNC have been enabled, in large part, by the Enhanced Machine Controller project from the National Institute of Standards and Technology (NIST), an agency of the US Government's Department of Commerce. EMC is a public domain program operating under the Linux operating system and working on PC based hardware. After the NIST project ended, development continued, leading to EMC2, which is licensed under the GNU General Public License and Lesser GNU General Public License (GPL and LGPL). Derivations of the original EMC software have also led to several proprietary PC based programs, notably TurboCNC and Mach3, as well as embedded systems based on proprietary hardware. The availability of these PC based control programs has led to the development of DIY CNC, allowing hobbyists to build their own machines[18][19] using open source hardware designs. The same basic architecture has allowed manufacturers, such as Sherline and Taig, to produce turnkey lightweight desktop milling machines for hobbyists.
The easy availability of PC based software and support information for Mach3, written by Art Fenerty, lets anyone with some time and technical expertise make complex parts for home and prototype use. Fenerty is considered a principal founder of Windows-based PC CNC machining.[20]
Eventually, the homebrew architecture was fully commercialized and used to create larger machinery suitable for commercial and industrial applications. This class of equipment has been referred to as Personal CNC. Parallel to the evolution of personal computers, Personal CNC has its roots in EMC and PC based control, but has evolved to the point where it can replace larger conventional equipment in many instances. As with the Personal Computer, Personal CNC is characterized by equipment whose size, capabilities, and original sales price make it useful for individuals, and which is intended to be operated directly by an end user, often without professional training in CNC technology.
Although modern data storage techniques have moved on from punch tape in almost every other role, tapes are still relatively common in CNC systems. Several reasons explain this. One is easy backward compatibility of existing programs. Companies were spared the trouble of re-writing existing tapes into a new format. Another is the principle, mentioned earlier, that individual firms tend to stick with whatever is profitable, and their time and money for trying out alternatives is limited. A small firm that has found a profitable niche may keep older equipment in service for years because "if it ain't broke [profitability-wise], don't fix it." Competition places natural limits on that approach, as some amount of innovation and continuous improvement eventually becomes necessary, lest competitors be the ones who find the way to the "better mousetrap".
One change that was implemented fairly widely was the switch from paper to mylar tapes, which are much more mechanically robust. Floppy disks, USB flash drives and local area networking have replaced the tapes to some degree, especially in larger environments that are highly integrated.
The proliferation of CNC led to the need for new CNC standards that were not encumbered by licensing or particular design concepts, like APT. A number of different "standards" proliferated for a time, often based around vector graphics markup languages supported by plotters. One such standard has since become very common, the "G-code" that was originally used on Gerber Scientific plotters and then adapted for CNC use. The file format became so widely used that it has been embodied in an EIA standard. In turn, while G-code is the predominant language used by CNC machines today, there is a push to supplant it with STEP-NC, a system that was deliberately designed for CNC, rather than grown from an existing plotter standard.
While G-code is the most common method of programming, some machine-tool and control manufacturers have also invented their own proprietary "conversational" methods of programming (such as Mazak's Mazatrol and Hurco's conversational control), trying to make simple parts easier to program and to simplify set-up and modification at the machine. These have met with varying success.
A more recent advancement in CNC interpreters is support for logical commands, known as parametric programming (also known as macro programming). Parametric programs include both device commands and a control language similar to BASIC. The programmer can write if/then/else statements and loops, call subprograms, perform arithmetic, and manipulate variables, giving a large degree of freedom within one program. An entire product line of different sizes can be programmed using logic and simple math to create and scale an entire range of parts, or to create a stock part that can be scaled to any size a customer demands.
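The idea can be sketched in ordinary Python rather than in any particular controller's macro dialect: one routine, driven by variables, covers a whole family of part sizes. The function name and the emitted G-code words here are illustrative assumptions, not taken from a specific controller.

```python
import math

def bolt_circle(diameter, holes, depth):
    """Emit one G81 drilling-cycle line per hole of a bolt circle.

    Illustrative sketch of parametric programming: changing the
    parameters scales the same program to any part size, much as a
    controller macro program would do with variables and loops.
    """
    r = diameter / 2.0
    lines = []
    for i in range(holes):
        angle = 2 * math.pi * i / holes   # evenly spaced holes
        x = r * math.cos(angle)
        y = r * math.sin(angle)
        lines.append(f"G81 X{x:.3f} Y{y:.3f} Z{-depth:.3f} R2.0 F100")
    return lines

# The same logic covers an entire product line of sizes:
small = bolt_circle(diameter=50.0, holes=4, depth=10.0)
large = bolt_circle(diameter=200.0, holes=8, depth=10.0)
```

A real parametric program would run in the control itself, but the principle is identical: logic and simple math in place of hand-written coordinates.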
Since about 2006, the idea has been suggested and pursued of bringing to CNC and DNC several trends from elsewhere in the world of information technology that have not yet much affected them. One of these trends is the combination of greater data collection (more sensors), greater and more automated data exchange (via new, open industry-standard XML schemas), and data mining to yield a new level of business intelligence and workflow automation in manufacturing. Another is the emergence of widely published APIs together with the aforementioned open data standards to encourage an ecosystem of user-generated apps and mashups, which can be both open and commercial; in other words, taking the app-marketplace culture that began in web and smartphone development and spreading it to CNC, DNC, and the other factory automation systems networked with them. MTConnect is a leading effort to bring these ideas into successful implementation.
Modern CNC mills differ little in concept from the original model built at MIT in 1952. Mills typically consist of a table that moves in the X and Y axes and a tool spindle that moves in the Z axis (depth). The position of the tool is driven by motors through a series of step-down gears to provide highly accurate movements, or, in modern designs, by direct-drive stepper motors or servo motors. Open-loop control works as long as the forces are kept small enough and speeds are not too great. On commercial metalworking machines, closed-loop controls are standard and required to provide the accuracy, speed, and repeatability demanded.
As the controller hardware evolved, the mills themselves also evolved. One change has been to enclose the entire mechanism in a large box as a safety measure, often with additional safety interlocks to ensure the operator is far enough from the working piece for safe operation. Most new CNC systems built today are completely electronically controlled.
CNC-like systems are now used for any process that can be described as a series of movements and operations. These include laser cutting, welding, friction stir welding, ultrasonic welding, flame and plasma cutting, bending, spinning, pinning, gluing, fabric cutting, sewing, tape and fiber placement, routing, picking and placing (PnP), and sawing.
In CNC, a "crash" occurs when the machine moves in a way that is harmful to the machine, tools, or parts being machined. A crash can bend or break cutting tools, accessory clamps, vises, and fixtures, or damage the machine itself by bending guide rails, breaking drive screws, or causing structural components to crack or deform under strain. A mild crash may not damage the machine or tools, but may damage the part being machined so that it must be scrapped.
Many CNC tools have no inherent sense of the absolute position of the table or tools when turned on. They must be manually "homed" or "zeroed" to establish a reference to work from; this reference only locates the workpiece and is not a hard limit on the mechanism's motion. It is often possible to drive the machine outside the physical bounds of its drive mechanism, resulting in a collision with itself or damage to the drive mechanism. In addition to physical limit switches, many machines implement control parameters that limit axis motion past a certain point. However, these parameters can often be changed by the operator.
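A controller-side soft limit amounts to a range check applied before a move is sent to the drives. The following is a minimal sketch; the function name and limit values are hypothetical, and, as noted above, such parameters are typically operator-configurable.

```python
def check_soft_limit(target_mm, low_mm, high_mm):
    """Reject a commanded axis position outside the configured travel.

    Sketch of a software 'control parameter' limit: the check runs
    before the move reaches the motors, complementing (not replacing)
    physical limit switches.
    """
    if not (low_mm <= target_mm <= high_mm):
        raise ValueError(
            f"soft limit violation: {target_mm} mm outside "
            f"[{low_mm}, {high_mm}] mm")
    return target_mm
```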
Many CNC tools also have no awareness of their working environment. Machines may have load-sensing systems on spindle and axis drives, but some do not. They blindly follow the machining code provided, and it is up to the operator to detect whether a crash is occurring or about to occur and to manually abort the cutting process. Machines equipped with load sensors can stop axis or spindle movement in response to an overload condition, but this does not prevent a crash from occurring; it may only limit the resulting damage. Some crashes may never overload any axis or spindle drive.
If the drive system is weaker than the machine's structural integrity, the drive system simply pushes against the obstruction and the drive motors "slip in place". The machine tool may not detect the collision or the slipping, so, for example, the tool should now be at 210 mm on the X axis but is in fact at 32 mm, where it hit the obstruction and kept slipping. All subsequent tool motions will be off by -178 mm on the X axis, and all future motions are now invalid, which may result in further collisions with clamps, vises, or the machine itself. This is common in open-loop stepper systems, but is not possible in closed-loop systems unless mechanical slippage between the motor and drive mechanism has occurred. Instead, in a closed-loop system, the machine will continue to attempt to move against the load until either the drive motor goes into an overcurrent condition or a servo following-error alarm is generated.
Collision detection and avoidance are possible through the use of absolute position sensors (optical encoder strips or disks) to verify that motion occurred, or torque or power-draw sensors on the drive system to detect abnormal strain when the machine should simply be moving and not cutting. However, these are not common components of most hobby CNC tools.
Instead, most hobby CNC tools simply rely on the assumed accuracy of stepper motors that rotate a specific number of degrees in response to magnetic field changes. It is often assumed that the stepper is perfectly accurate and never mis-steps, so tool position monitoring simply involves counting the number of pulses sent to the stepper over time. An alternative means of stepper position monitoring is usually not available, so crash or slip detection is not possible.
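Open-loop position tracking amounts to pulse bookkeeping, which a short sketch makes concrete using the figures from the slipping example above. The steps-per-millimetre value is an assumed, typical figure, not from any specific machine.

```python
def open_loop_position_mm(pulses_sent, steps_per_mm=200):
    """The controller's only notion of position is its own pulse count.

    Sketch of open-loop tracking: lost steps are invisible, because
    nothing measures whether the motor actually moved.
    """
    return pulses_sent / steps_per_mm

# The controller has sent 42,000 pulses, so it believes the X axis
# is at 210 mm...
believed = open_loop_position_mm(42_000)
# ...but if the motor stalled against an obstruction after 6,400
# pulses, the axis is really at 32 mm, and every later move carries
# the 178 mm error.
actual = open_loop_position_mm(6_400)
```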
Commercial CNC metalworking machines use closed loop feedback controls for axis movement. In a closed loop system, the control is aware of the actual position of the axis at all times. With proper control programming, this will reduce the possibility of a crash, but it is still up to the operator and tool path programmer to ensure that the machine is operated in a safe manner.
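The closed-loop principle can be sketched as a comparison between the commanded position and a hypothetical encoder reading; the function name and the following-error limit are illustrative assumptions.

```python
def check_following_error(commanded_mm, encoder_mm, limit_mm=0.5):
    """Compare where the axis was told to go with where the encoder
    says it is; alarm out rather than push on if the gap exceeds the
    allowed following error. The 0.5 mm limit is illustrative only.
    """
    error_mm = abs(commanded_mm - encoder_mm)
    if error_mm > limit_mm:
        raise RuntimeError(f"servo following error: {error_mm:.3f} mm")
    return error_mm
```

A real servo loop runs this comparison continuously at high rate and also feeds the error back into the drive command; the sketch shows only the alarm path.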
Within the numerical systems of CNC programming it is possible for the code generator to assume that the controlled mechanism is always perfectly accurate, or that accuracy tolerances are identical for all cutting or movement directions. This is not always a true condition of CNC tools.
CNC tools with a large amount of mechanical backlash can still be highly accurate if the drive or cutting mechanism is only driven so as to apply cutting force from one direction, and all driving systems are pressed tight together in that one cutting direction. However, a CNC device with high backlash and a dull cutting tool can lead to cutter chatter and possible workpiece gouging. Backlash also affects the accuracy of some operations involving axis movement reversals during cutting, such as the milling of a circle, where axis motion is sinusoidal. However, this can be compensated for if the amount of backlash is precisely known, whether from linear encoders or manual measurement.
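Software backlash compensation on a single axis can be sketched as follows, assuming the backlash has already been measured; the function name and backlash value are illustrative.

```python
def compensate_backlash(targets_mm, backlash_mm):
    """Shift the motor command by the measured backlash each time the
    commanded direction reverses, so the table (not just the motor)
    lands on the programmed position. Positions are in table units.
    """
    out = []
    offset = 0.0     # current motor-vs-table shift
    last_dir = 0     # +1, -1, or 0 before the first move
    pos = 0.0        # assume the axis starts at zero
    for target in targets_mm:
        d = (target > pos) - (target < pos)   # direction of this move
        if d != 0 and last_dir != 0 and d != last_dir:
            offset += backlash_mm * d         # take up the slack
        if d != 0:
            last_dir = d
        out.append(target + offset)
        pos = target
    return out

# Reversing from 20 mm back to 15 mm over-travels by the 0.05 mm
# backlash so that the table itself reaches 15 mm:
motor_cmds = compensate_backlash([10.0, 20.0, 15.0, 25.0], 0.05)
```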
The high-backlash mechanism itself is not necessarily relied on to be repeatably accurate for the cutting process; instead, some other reference object or precision surface may be used to zero the mechanism by tightly applying pressure against the reference and setting that as the zero reference for all following CNC-encoded motions. This is similar to the manual machine tool method of clamping a micrometer onto a reference beam and adjusting the vernier dial to zero using that object as the reference.